    Counterfactual Causality for Reachability and Safety based on Distance Functions

    Investigations of causality in operational systems aim at providing human-understandable explanations of why a system behaves as it does. There is, in particular, a demand to explain what went wrong on a given counterexample execution that shows that a system does not satisfy a given specification. To this end, this paper investigates a notion of counterfactual causality in transition systems based on Stalnaker's and Lewis' semantics of counterfactuals in terms of most similar possible worlds, and introduces a novel corresponding notion of counterfactual causality in two-player games. Using distance functions between paths in transition systems, this notion defines whether reaching a certain set of states is a cause for the violation of a reachability or safety property. Similarly, using distance functions between memoryless strategies in reachability and safety games, the paper defines whether reaching a set of states is a cause of the fact that a given strategy for the player under investigation is losing. The contribution of the paper is twofold: in transition systems, it is shown that counterfactual causality can be checked in polynomial time for three prominent distance functions between paths. In two-player games, the introduced notion of counterfactual causality is shown to be checkable in polynomial time for two natural distance functions between memoryless strategies. Further, the paper defines a notion of explanation that can be extracted from a counterfactual cause and that pinpoints the changes to be made to the given strategy in order to transform it into a winning strategy. For the two distance functions under consideration, the problem of deciding whether such an explanation imposes only the minimal changes necessary with respect to the distance function used turns out to be coNP-complete and not solvable in polynomial time unless P = NP, respectively. Comment: This is the extended version of a paper accepted for publication at GandALF 202
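    The abstract does not spell out the formal definition; as a rough, purely illustrative sketch (the notation T, pi, C, phi, d is ours, not the paper's), a Lewis-style "most similar worlds" condition based on a distance function d between paths of a transition system T could read:

        \[
          C \text{ is a counterfactual cause of the violation of } \varphi \text{ on } \pi
          \quad\text{iff}\quad
          \pi \models \Diamond C
          \;\text{ and }\;
          \forall \pi' \in \operatorname*{arg\,min}_{\rho\,\not\models\,\Diamond C} d(\pi,\rho):\; \pi' \models \varphi ,
        \]

    that is, the candidate cause C is reached on the counterexample pi, and every C-avoiding path that is closest to pi with respect to d satisfies the property. Whether the paper adopts this universal ("would") reading or an existential ("might") variant, and which three path distances make the check polynomial, is specified in the paper itself.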

    Playing Stochastically in Weighted Timed Games to Emulate Memory

    Weighted timed games are two-player zero-sum games played in a timed automaton equipped with integer weights. We consider optimal reachability objectives, in which one of the players, whom we call Min, wants to reach a target location while minimising the cumulated weight. While deciding whether Min has a strategy guaranteeing a value lower than a given threshold is known to be undecidable (with two or more clocks), several conditions, one of them being divergence, have been given to recover decidability. In such weighted timed games (as in untimed weighted games in the presence of negative weights), Min may need finite memory to play (close to) optimally. It is thus tempting to try to emulate this finite memory with other strategic capabilities. In this work, we allow the players to use stochastic decisions, both in the choice of transitions and of timing delays. We give for the first time a definition of the expected value in weighted timed games, overcoming several theoretical challenges. We then show that, in divergent weighted timed games, the stochastic value is indeed equal to the classical (deterministic) value, thus proving that Min can guarantee the same value while using only stochastic choices and no memory.
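    As a purely illustrative sketch of the two quantities being compared (the precise construction of the expectation over timed plays is the paper's contribution and is not reproduced here), write weight(rho) for the cumulated weight of a play rho that reaches the target, and +infinity otherwise; then

        \[
          \mathrm{Val}(s) = \inf_{\sigma_{\mathrm{Min}}} \sup_{\sigma_{\mathrm{Max}}} \mathsf{weight}\big(\mathrm{play}(s,\sigma_{\mathrm{Min}},\sigma_{\mathrm{Max}})\big),
          \qquad
          \mathrm{Val}^{\mathrm{stoch}}(s) = \inf_{\sigma_{\mathrm{Min}}} \sup_{\sigma_{\mathrm{Max}}} \mathbb{E}^{s}_{\sigma_{\mathrm{Min}},\sigma_{\mathrm{Max}}}\big[\mathsf{weight}\big],
        \]

    where in the second quantity the strategies may randomise over both the transitions taken and the delays waited. The result stated above is that, in divergent weighted timed games, Val^stoch(s) = Val(s), and Min attains it with memoryless stochastic strategies.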

    Reaching Your Goal Optimally by Playing at Random with No Memory

    Shortest-path games are two-player zero-sum games played on a graph equipped with integer weights. One player, whom we call Min, wants to reach a target set of states while minimising the total weight, and the other one has an antagonistic objective. This combination of a qualitative reachability objective and a quantitative total-payoff objective is one of the simplest settings in which Min needs memory (pseudo-polynomial in the weights) to play optimally. In this article, we study a trade-off that allows Min to play at random but to use no memory. We show that Min can achieve the same optimal value in both cases. In particular, we compute a randomised memoryless ε-optimal strategy when it exists, where probabilities are parametrised by ε. We also show that for some games, no optimal randomised strategies exist. We then characterise, and decide in polynomial time, the class of games admitting an optimal randomised memoryless strategy.
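    The paper's construction of randomised memoryless ε-optimal strategies is not reproduced here. As background only, a standard Bellman-style min-max iteration computes the (deterministic) values of such a game; the Python sketch below assumes that every state has an outgoing edge and that all values are finite, and with arbitrary integer weights a pseudo-polynomial number of iterations may be needed before the values stabilise. The toy game at the end is invented for the example.

        # Illustrative sketch, not the paper's algorithm: Bellman-style value
        # iteration for min-max shortest-path values on a finite weighted graph.
        INF = float("inf")

        def spg_values(states, edges, owner, targets, iterations):
            """states: list of states; edges: dict state -> list of (weight, successor);
            owner: dict state -> "Min" or "Max"; targets: set of target states."""
            value = {s: (0 if s in targets else INF) for s in states}
            for _ in range(iterations):
                new_value = {}
                for s in states:
                    if s in targets:
                        new_value[s] = 0
                        continue
                    candidates = [w + value[t] for (w, t) in edges[s]]
                    new_value[s] = min(candidates) if owner[s] == "Min" else max(candidates)
                value = new_value
            return value

        # Toy game: Min wants to reach "goal" while minimising the total weight.
        states = ["a", "b", "goal"]
        edges = {"a": [(2, "b"), (5, "goal")], "b": [(1, "goal"), (3, "a")], "goal": [(0, "goal")]}
        owner = {"a": "Min", "b": "Max", "goal": "Min"}
        print(spg_values(states, edges, owner, {"goal"}, iterations=10))  # {'a': 5, 'b': 8, 'goal': 0}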

    Decidability of One-Clock Weighted Timed Games with Arbitrary Weights

    Weighted Timed Games (WTG for short) are the most widely used model to describe controller synthesis problems involving real-time issues. Unfortunately, they are notoriously difficult, and undecidable in general. As a consequence, one-clock WTGs have attracted a lot of attention, especially because they are known to be decidable when only non-negative weights are allowed. However, when arbitrary weights are considered, despite several recent works, their decidability status was still unknown. In this paper, we solve this problem positively and show that the value function can be computed in exponential time (if weights are encoded in unary).

    Performance Evaluation of Metro Regulations Using Probabilistic Model-checking

    Metros are subject to unexpected delays due to weather conditions, incidents, passenger misconduct, etc. To recover from delays and avoid their propagation to the whole network, metro operators use regulation algorithms that adapt the speeds and departure dates of trains. Regulation algorithms are ad-hoc tools that are tuned to cope with the characteristics of tracks, rolling stock, and passengers' habits. However, there is no universal regulation technique that is optimal in every environment, so the performance of a regulation technique must be evaluated before it is integrated into the network. In this work, we use probabilistic model-checking to evaluate the performance of regulation algorithms on simple metro lines. We model the moves of trains and random delays with Markov decision processes, and regulation as a controller that forces a decision depending on its partial knowledge of the state of the system. We then use the probabilistic model checker PRISM to evaluate the performance of regulation: we compute the probability of reaching a stable situation from an unstable one in less than d time units, letting d vary over a large enough time interval. This approach is applied to a case study, the metro network of Glasgow.
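    As a toy illustration of the kind of query described (the states, probabilities and bound below are invented for the example and are not the paper's model): once a regulation policy is fixed, the Markov decision process induces a Markov chain, and the probability of reaching a stable state within d steps (steps stand in here for time units) can be computed by the standard backward iteration that a probabilistic model checker such as PRISM performs for step-bounded reachability.

        # Illustrative sketch: probability of reaching a stable state within d steps
        # in a small Markov chain, computed by backward iteration.
        def bounded_reach_prob(transitions, stable, start, d):
            """transitions: dict state -> list of (probability, successor);
            stable: set of absorbing recovered states; d: step bound."""
            prob = {s: (1.0 if s in stable else 0.0) for s in transitions}
            for _ in range(d):
                new_prob = {}
                for s in transitions:
                    new_prob[s] = 1.0 if s in stable else sum(p * prob[t] for (p, t) in transitions[s])
                prob = new_prob
            return prob[start]

        # Toy line: a late train may recover directly, stay late, or degrade further.
        transitions = {
            "late":      [(0.6, "stable"), (0.3, "late"), (0.1, "very_late")],
            "very_late": [(0.5, "late"), (0.5, "very_late")],
            "stable":    [(1.0, "stable")],
        }
        for d in (1, 5, 10):
            print(d, bounded_reach_prob(transitions, {"stable"}, "late", d))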